80 research outputs found

    Categorising the sub-mJy population: Star-forming galaxies from deep radio surveys

    Get PDF
    Models predict that star-forming galaxies make up the majority of the source population detected in the very deepest radio surveys. Radio-selected samples of star-forming galaxies are therefore a potentially excellent means to chart, for example, the cosmic history of star formation. However, a significant minority of the faintest radio sources are AGN-powered ‘contaminants’ and must be removed from any purely star-formation-powered sample. Here we describe a multi-pronged method for separating star-forming and AGN-powered sources in a deep 1.4 GHz radio survey. We utilise a wealth of multi-wavelength information, including radio spectral and morphological information and radio-to-mid-IR SED modelling, to select a clean sample of star-formation-powered sources. We then derive the 1.4 GHz source counts separately for AGN and SFGs, calculate an independent measure of the evolving star-formation rate density to z∼2, and compare our results to the star-formation rate density determined at other wavelengths.
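
    One common ingredient of such AGN/SFG separation (shown here as an illustration, not necessarily this survey's exact cuts) is the infrared-radio correlation parameter q24 = log10(S_24um / S_1.4GHz): star-forming galaxies cluster around a well-defined q24 locus, radio-loud AGN fall well below it, and a flat or inverted radio spectrum is a further AGN indicator. A minimal sketch, with placeholder threshold values:

```python
import numpy as np

def q24(s24_uJy, s1400_uJy):
    """Infrared-radio correlation parameter: log10(S_24um / S_1.4GHz)."""
    return np.log10(s24_uJy / s1400_uJy)

def classify_sources(s24_uJy, s1400_uJy, alpha_radio,
                     q_agn_cut=0.0, alpha_flat=-0.3):
    """Flag a source as AGN if it is radio-loud (low q24) or has a flat or
    inverted radio spectrum; otherwise treat it as star-forming.
    The cut values are illustrative placeholders, not the survey's cuts."""
    is_agn = (q24(s24_uJy, s1400_uJy) < q_agn_cut) | (alpha_radio > alpha_flat)
    return np.where(is_agn, "AGN", "SFG")

# Toy catalogue: 24 um and 1.4 GHz flux densities (uJy) and radio spectral index
s24 = np.array([300.0, 40.0, 500.0])
s14 = np.array([100.0, 90.0, 60.0])
alpha = np.array([-0.8, -0.1, -0.7])
print(classify_sources(s24, s14, alpha))  # ['SFG' 'AGN' 'SFG']
```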

    2017 Robotic Instrument Segmentation Challenge

    Get PDF
    In mainstream computer vision and machine learning, public datasets such as ImageNet, COCO and KITTI have helped drive enormous improvements by enabling researchers to understand the strengths and limitations of different algorithms via performance comparison. However, this type of approach has had limited translation to problems in robot-assisted surgery, as this field has never established the same level of common datasets and benchmarking methods. In 2015, a sub-challenge was introduced at the EndoVis workshop in which a set of robotic images was provided with automatically generated annotations from robot forward kinematics. However, there were issues with this dataset due to the limited background variation, lack of complex motion and inaccuracies in the annotation. In this work we present the results of the 2017 challenge on robotic instrument segmentation, which involved 10 teams participating in binary, parts-based and type-based segmentation of articulated da Vinci robotic instruments.
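
    Challenges of this kind are typically scored with overlap metrics between predicted and reference masks. As a minimal illustration (the metric and label conventions here are assumptions, not the challenge's official scoring script), mean intersection-over-union across instrument classes can be computed as:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

def mean_iou(pred, gt, n_classes):
    """Mean IoU over instrument classes (parts- or type-based labels),
    ignoring the background label 0."""
    return float(np.mean([iou(pred == c, gt == c) for c in range(1, n_classes)]))

# Toy 4x4 label maps with two instrument classes (1 and 2)
gt   = np.array([[0,1,1,0],[0,1,1,0],[2,2,0,0],[2,2,0,0]])
pred = np.array([[0,1,0,0],[0,1,1,0],[2,2,0,0],[0,2,0,0]])
print(mean_iou(pred, gt, n_classes=3))  # 0.75
```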

    Circumstellar Structure around Evolved Stars in the Cygnus-X Star Formation Region

    Get PDF
    We present observations of newly discovered 24 micron circumstellar structures detected with the Multiband Imaging Photometer for Spitzer (MIPS) around three evolved stars in the Cygnus-X star-forming region. One of the objects, BD+43 3710, has a bipolar nebula, possibly due to an outflow or a torus of material. A second, HBHA 4202-22, a Wolf-Rayet candidate, shows a circular shell of 24 micron emission suggestive of either a limb-brightened shell or a disk seen face-on. No diffuse emission was detected around either of these two objects in the Spitzer 3.6-8 micron Infrared Array Camera (IRAC) bands. The third object is the luminous blue variable candidate G79.29+0.46. We resolved the previously known inner ring in all four IRAC bands. The 24 micron emission from the inner ring extends ~1.2 arcmin beyond the shorter-wavelength emission, well beyond what can be attributed to the difference in resolution between MIPS and IRAC. Additionally, we have discovered an outer ring of 24 micron emission, possibly due to an earlier episode of mass loss. For the two shell stars, we present the results of radiative transfer models, constraining the stellar and dust shell parameters. The shells are composed of amorphous carbon grains, plus polycyclic aromatic hydrocarbons in the case of G79.29+0.46. Both G79.29+0.46 and HBHA 4202-22 lie behind the main Cygnus-X cloud. Although G79.29+0.46 may simply be on the far side of the cloud, HBHA 4202-22 is unrelated to the Cygnus-X star formation region.
    Comment: Accepted by A
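
    The paper's detailed radiative transfer modelling is not reproduced here, but its basic ingredient is thermal emission from dust grains. A minimal sketch of an optically thin modified blackbody, F_nu ∝ nu^beta · B_nu(T_dust), with illustrative (not fitted) temperature and emissivity values, shows why a cool shell can be bright at 24 microns yet undetected in the 3.6-8 micron IRAC bands:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, light speed, Boltzmann

def planck_nu(nu, T):
    """Blackbody specific intensity B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def dust_flux(wavelength_um, T_dust=150.0, beta=1.0, scale=1e-12):
    """Optically thin modified blackbody: F_nu ~ nu^beta * B_nu(T_dust).
    T_dust, beta and the scale are illustrative placeholders, not the
    paper's fitted shell parameters."""
    nu = C / (wavelength_um * 1e-6)
    return scale * nu**beta * planck_nu(nu, T_dust)

# A ~150 K shell is dozens of times brighter at 24 um than at 8 um:
for wl_um in (3.6, 8.0, 24.0):
    print(f"{wl_um:5.1f} um -> F_nu ~ {dust_flux(wl_um):.2e}")
```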

    NVIDIA FLARE: Federated Learning from Simulation to Real-World

    Full text link
    Federated learning (FL) enables building robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package. It allows researchers to apply their data science workflows with any training library (PyTorch, TensorFlow, XGBoost, or even NumPy) in real-world FL settings. This paper introduces the key design principles of NVFlare and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
    Comment: Accepted at the International Workshop on Federated Learning, NeurIPS 2022, New Orleans, USA (https://federated-learning.org/fl-neurips-2022); Revised version v2: added Key Components list, system metrics for homomorphic encryption experiment; Extended v3 for journal submission
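
    NVFlare's own API is not reproduced here; as a generic sketch of the scatter-and-gather workflow such an SDK orchestrates, the following implements weighted federated averaging (FedAvg) in plain NumPy, with a linear least-squares model standing in for a real training library:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear least-squares model (stand-in for a real training library)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One federated round: broadcast global weights, collect local updates,
    and aggregate them weighted by each client's sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):  # three collaborators with different data sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without any client sharing raw data
```

    Only model weights cross the trust boundary here; that boundary is where mechanisms such as homomorphic encryption or differential privacy would be applied in a production FL stack.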

    CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation

    Full text link
    Domain Adaptation (DA) has recently raised strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets have mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance in patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice: VS 92.5%, cochleas 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
    Comment: Submitted to Medical Image Analysis
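
    The reported figures are Dice similarity coefficients, aggregated as a median over test cases for each structure. A minimal sketch of that evaluation (the label convention is an assumption for illustration):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def median_dice_per_structure(preds, gts, labels=(1, 2)):
    """Median Dice per structure over all test cases
    (label 1 = VS, label 2 = cochleas, as an illustrative convention)."""
    return {lab: float(np.median([dice(p == lab, g == lab)
                                  for p, g in zip(preds, gts)]))
            for lab in labels}

# Toy example with two 3x3 "cases"
gts   = [np.array([[1,1,0],[0,2,2],[0,0,0]]), np.array([[0,1,0],[0,2,0],[0,2,0]])]
preds = [np.array([[1,0,0],[0,2,2],[0,0,0]]), np.array([[0,1,0],[0,2,2],[0,0,0]])]
print(median_dice_per_structure(preds, gts))
```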